Text File | 1993-06-25 | 60.3 KB | 1,302 lines |
- ________________________ Subj: Centroid of a Polygon ________________________
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 173393
- To: John M. Dlugosz 70007,4657 (X) Date: 31-May-92 17:42:31
-
- The centroid is the point at which the polygon would balance were it a flat
- plate with evenly distributed weight. It's apparently fairly common to base
- priority sorts on the centroid rather than maximum and minimum z, though very
- few graphics books discuss it. (Abrash's does, but only briefly.) I'm finally
- beginning to understand why the centroid is used -- it produces the best
- first approximation of any method of priority sort -- and want to switch my
- hidden surface routines to use it, but don't want to sit here and figure it
- out by eyeball guess on every polygon.
-
- Maybe if I sit down and think about it, I can come up with an algorithm (but
- I'm not going to bet money on it.) What was the "other graphics book" that
- mentioned it?
-
- --Chris
- ...........................................................................
-
- Fm: Everett Kaser (Sherlock) 70673,1547 # 173494
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 31-May-92 22:15:06
-
- Well, I'm not a mathematician, and I haven't gone to the extent of designing
- an algorithm or writing any code (wouldn't want to take the fun away from
- you! <g>), but putting a little of my old geometry together with a little of
- my college physics, here's how it seems to me:
-
- You CAN find the centroid of a triangle by drawing two of the triangle's
- medians (the lines from each vertex to the midpoint of the opposite side),
- since all three medians of a triangle ALWAYS intersect at the centroid.
-
- So, let's say that you pick a vertex of your convex polygon and draw a line
- from it to every other vertex of the polygon except for its neighbors on
- either side. This divides the polygon up into a bunch of triangles (n-2,
- where n is the number of vertices). You can now find the centroid of each of
- these individual triangles quite easily. This gives you a list of "centers
- of gravity" for the triangles, which together, comprise the "weight" of the
- entire polygon. So, (he says off-handedly) all that's left is to find the
- center of gravity for these centroids. (Here's where the college physics
- comes in: you can treat the mass of an object AS IF the entire mass exists
- as a point at the mass's center of gravity. Hence, why I think this scheme,
- or something similar will work here.)
-
- Here's where I'm not quite as certain of the accuracy, but it LOOKS right on
- the couple of test cases I tried.
-
- Take the center of gravity for one of the triangles on the "outside", to
- either side of the "central" vertex (from which you drew all the lines in the
- first part). Now, draw a line from it to the other "outside" triangle on the
- other side of the "central" vertex. Then, use the next vertex in (from the
- second "outside" center of gravity) as the third point of a triangle, and
- draw the other two sides of this first "center of gravity" triangle. Now use
- that "third point" from the previous sentence with the
- first "outside" center of gravity point and the next center of gravity point
- over, to form the next triangle. Just keep adding one more centroid to form
- each successive triangle until you run out of centroids. You now have a new
- set of triangles, but one less than you did before. You now find the
- centroids for each of these triangles. Repeat until you're down to one
- triangle, and its centroid is the centroid for the entire polygon.
-
- Now, in all likelihood, that's all wrong, the centroid is elsewhere, and
- there's an easier way of finding the center of gravity for a collection of
- masses (the centroids for the FIRST set of triangles). I don't remember THAT
- much of my college physics.
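The "easier way" Everett suspects exists does: weight each fan triangle's centroid by its signed area and take a single weighted average, no recursion needed. A minimal Python sketch (the function name and layout are mine, not from the thread):

```python
def polygon_centroid(pts):
    """Centroid of a simple polygon given as [(x, y), ...].

    Fan-triangulate from pts[0], then average the triangle centroids
    weighted by each triangle's signed area (so it also works when
    some fan triangles fall outside the polygon).
    """
    x0, y0 = pts[0]
    area_sum = cx = cy = 0.0
    for (x1, y1), (x2, y2) in zip(pts[1:], pts[2:]):
        # Signed area of triangle (pts[0], p1, p2) via the cross product.
        a = ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
        area_sum += a
        # A triangle's centroid is the average of its three vertices.
        cx += a * (x0 + x1 + x2) / 3.0
        cy += a * (y0 + y1 + y2) / 3.0
    return cx / area_sum, cy / area_sum
```

For the unit square [(0,0), (1,0), (1,1), (0,1)] this returns (0.5, 0.5), as expected.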
-
- Anyway, maybe that will help some, and thanks for the puzzler.
-
- Everett
- ...........................................................................
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 173677
- To: John M. Dlugosz 70007,4657 (X) Date: 01-Jun-92 13:06:53
-
- Oddly, I've found that the centroid (or something approximating it) tends to
- work better in a lot of real world situations than the minimum or maximum z,
- though this may be the result of biased data. Consider a pyramid constructed
- from four triangles and a square, with the triangles coming to a point at one
- end and the square at the other end as the base. Turn off backface removal.
- (Okay, that makes the situation a tad unrealistic, but stick with me.) Use a
- simple depth sort on the maximum z. Point the tip of the pyramid more than 90
- degrees away from the viewer. Every triangle in the pyramid has the same
- maximum z, so the sort will be essentially random and hidden surfaces will
- show through. Now switch to a simple depth sort on the minimum z. Rotate the
- point of the pyramid less than 90 degrees from the viewer. Same problem. But
- use the centroid and the sort is correct in both situations. Only when the
- point of the pyramid is aimed directly at the viewer, or directly away, do
- all of the triangles have the same centroid distance -- and then it's
- irrelevant, since none of the triangles occludes any of the others.
-
- Of course, backface removal normally takes care of this, so it may not be
- entirely necessary. Still, the centroid does seem useful. Unless you can
- think of some situations that contradict these.
-
- --Chris
- ...........................................................................
-
- Fm: John M. Dlugosz 70007,4657 # 173750
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 01-Jun-92 16:39:34
-
- I think I see your point. The min or max z sorting does not work right when
- polygons share common edges, in particular. I discovered this a few months
- ago. Using _any_ interior point of the z extent would overcome this. I
- think I'll try sorting on the midpoint of z.
-
- BTW, how do you handle the 5th test of the painter's algorithm?
-
- --John
- ...........................................................................
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 173768
- To: John M. Dlugosz 70007,4657 (X) Date: 01-Jun-92 17:37:09
-
- I'm about to try a method that does away with extents and overlap tests
- altogether, which may or may not work. I'll sort on the distance of the
- center of the extent, calculated as:
-
- CENTER_X*CENTER_X+CENTER_Y*CENTER_Y+CENTER_Z*CENTER_Z
-
- After sorting, I'll draw the polygons, without further swaps. If the scenery
- is designed correctly, and backface removal is performed before the depth
- sorts, pathological cases should only arise occasionally during animation.
- With luck, they'll only arise so briefly that nobody will notice.
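That sort key needs no square root, since squared distance preserves the ordering. A small Python sketch (the data layout is illustrative, not from the thread):

```python
def depth_sort(polys):
    """Sort polygons far-to-near on the squared distance of each
    polygon's extent center from the viewpoint at the origin.

    sqrt is unnecessary: x*x + y*y + z*z orders points the same
    way the true distance does.
    """
    def sq_dist(poly):
        cx, cy, cz = poly["center"]
        return cx * cx + cy * cy + cz * cz
    return sorted(polys, key=sq_dist, reverse=True)  # farthest drawn first
```

After this single sort the polygons are drawn in list order, with no further swaps, exactly as described above.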
-
- The trouble with checking overlapping extents is that you run into a
- combinatorial explosion. You have to compare the extents of every polygon
- with every other polygon, so that the number of comparisons equals the number of
- polygons _squared_. Put too many polygons in a scene and even brief extent
- comparisons begin to overload the system and slow down the frame rate, never
- mind the additional tests when the extents overlap.
-
- If it looks like crap, of course, I'll be re-implementing the extent checks
- and the five tests. *Groan*
-
- --Chris
- ...........................................................................
-
- Fm: John M. Dlugosz 70007,4657 # 173796
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 01-Jun-92 18:50:43
-
- I don't get it. The center's distance from the origin (in eye co-ordinates I
- take it) will not define a sort order. That is just the initial ordering,
- and I don't see how that is better than just looking at the Z distance.
- Picture an object that is up in the corner (so x and y are large) and z is
- some value. Now compare that with an object centered in x and y that has a
- large Z value but still sorts smaller because x and y are zero. You will draw
- that one second even though it is behind the large near object in the corner.
-
- In general, polygons can overlap in interesting ways. The only way to tell
- which is in front is by testing a point that is common to the projections of
- both and seeing which is nearer. The first 4 tests of the painter's
- algorithm have smaller big-O costs, which is why they are there. But they don't
- always work.
-
- Testing the extents is one of the first tests; that is cheap. And you don't
- test everyone against everyone else-- only a pair of neighboring polygons
- after the initial sorting.
-
- Anyway, I plan on writing mine tonight. Was just wondering about test 5.
- I'll do that one only, first, to make sure it works right. Then I'll add the
- others to speed it up. The hard part is finding a point common to both
- projections.
-
- --John
-
- ____________________________ Subj: Polygon Stuff ____________________________
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 173876
- To: John M. Dlugosz 70007,4657 (X) Date: 01-Jun-92 20:36:46
-
- I'm experimenting with several methods of sorting to see which produces the
- best result. Using the true distance from the origin rather than the z
- coordinate may or may not prove superior; I'll decide after I've played with
- it a bit.
-
- Hmm, are you sure that you can get away with only comparing neighboring
- polygons in the sorted list? It's possible for polygons quite distant from
- one another in the list to overlap in extent. But perhaps at least one of the
- overlapping polygons would also overlap the extents of the polygons in
- between the two, so that you could make multiple passes through the list,
- bringing the overlapping polygons closer and closer together, bubble-sort
- fashion, until no more swaps are necessary. I'll have to think about that.
- (Or is that what you had in mind all along?)
-
- --Chris
- ...........................................................................
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 174051
- To: John M. Dlugosz 70007,4657 (X) Date: 02-Jun-92 10:33:00
-
- >If the painter's algorythm makes you exchange two items, you then re-test in
- the new position.
-
- Wait a minute, does that work? Consider the following case:
-
-
- /
- /
- /
- /
- 1 /
- / /
- 2 / /
- / /
- / /
- /
- /
- /
- /
- / /
- 3 / /
- / /
- / /
- /
- /
- /
- /
- / ^
- |
- |
- |
- Viewer
-
-
- If these three polygons (seen edge on from above, looking down the y axis,
- with the viewer at the bottom) are sorted by maximum z, they'll end up sorted
- in the numeric order seen here. Assume that all three polygons have
- overlapping y extents. Polygon 1's z extent overlaps the other two polygons.
- Polygon 3 should be occluded by 1, but will be drawn over top of 1. The
- solution is to swap 3 and 1. Yet if only adjacent pairs are swapped, no swaps
- will occur, because neither 1 and 2 nor 2 and 3 overlap in the x extent. Or
- are polygons swapped merely because they overlap in the z extent, for
- purposes of further comparison?
-
- --Chris
- ...........................................................................
-
- Fm: John M. Dlugosz 70007,4657 # 174145
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 02-Jun-92 14:24:57
-
- According to Foley,
-
- "let the polygon at the far end of the sorted list of polygons be called P.
- Before it is scan converted, it must be tested against each polygon Q whose z
- extent overlaps P, to prove that P cannot obscure Q. ... as soon as one
- succeeds, P has been shown not to obscure Q."
-
- OK, so sort by max Z. Then start with 1, and compare it with each element in
- the list whose extent overlaps it. Those will be the following n objects,
- until the max z of an object is less than the min z of the current object.
-
- If it fails one of the tests, then (assuming the polygons don't have cyclic
- overlaps), reposition P to just after the one that overlaps it and start over
- with the new top of the list. (the full way is to split the polygon)
-
- There is more to it... new versions of tests 3 and 4 after swapping the
- polygons, to find out if they can be simply reversed or need to be split.
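A skeleton of that outer loop in Python (the structure is my sketch of the procedure described above, not Foley's code; `may_obscure` stands in for the five occlusion tests, and cyclic overlaps are flagged and skipped rather than split):

```python
def painters_sort(polys, may_obscure):
    """Depth-sort skeleton in the style described above.

    polys: objects with .zmin/.zmax eye-space extents, pre-sorted
    by max z, farthest first.
    may_obscure(p, q): True if p might obscure q (the five tests).
    A polygon is repositioned at most once; a second failure is
    treated as a cyclic overlap and skipped (the full method would
    split the polygon instead).
    """
    order = list(polys)
    moved = set()
    i = 0
    while i < len(order) - 1:
        p = order[i]
        repositioned = False
        for j in range(i + 1, len(order)):
            q = order[j]
            if q.zmax < p.zmin:        # z extents no longer overlap
                break
            if may_obscure(p, q):
                if id(p) in moved:     # assume a cyclic overlap
                    continue
                order.pop(i)
                order.insert(j, p)     # reposition p just after q
                moved.add(id(p))
                repositioned = True
                break
        if not repositioned:
            i += 1                     # p is safe to draw; move on
    return order
```

The `moved` set plays the role of the one-swap flag discussed below: it guarantees termination even when polygons cyclically overlap.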
-
- --John
- ...........................................................................
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 174156
- To: John M. Dlugosz 70007,4657 (X) Date: 02-Jun-92 15:09:17
-
- Incidentally, I'm having a little trouble envisioning how one avoids going
- into an infinite loop if cyclic overlap occurs, given that one isn't
- splitting polygons. Foley suggests setting a flag to prevent two polygons
- from being swapped more than once, but I'm not clear as to how this flag is
- to be interpreted. As "don't swap this polygon again with the polygon
- immediately ahead of it in the list"? Or "don't swap this polygon again with
- the polygon at the head of the list"? Or are both interpretations wrong?
-
- --Chris
- ...........................................................................
-
- Fm: yngvi 76703,3046 # 173705
- To: Everett Kaser (Sherlock) 70673,1547 (X) Date: 01-Jun-92 14:13:34
-
- Right, the centroid calculation is very similar to center of mass with the
- assumption that mass is normalized to 1.
- ...........................................................................
-
- Fm: peter singer 100024,1603 # 176331
- To: John M. Dlugosz 70007,4657 (X) Date: 09-Jun-92 16:36:37
-
- Yes, there always exists a 'balancing point'. Consider objects of different
- mass on a tray: the 'bp' _moves_ most toward the object with the highest
- mass, second most toward the object with the 2nd highest mass, and so on.
- Each object contributes something like 'distance' times 'mass'. Consider
- 3 objects named A, B, C in a 2D axis system. The 3 objects have the
- masses A_, B_ and C_; the Balancing Point is named Bp and has the coords
- (Bpx,Bpy).
-
- Y
- :
- :
- : C(10,6)
- :
- :
- :
- : A(4,2)
- : B(21,1)
- :.....................................X
-
- Bpy=(A_*2+B_*1+C_*6)/(A_+B_+C_)   Bpx=(A_*4+B_*21+C_*10)/(A_+B_+C_)
-
- This can be done for any number of _static_ objects! If the axis _crosses_
- through the objects you have to change the sign.
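The weighted-average formula above translates directly to code; a small Python sketch (the function name is mine), tried with illustrative masses 15, 20 and 10 at the A, B, C positions shown:

```python
def balance_point(objects):
    """Balance point of point masses given as (mass, x, y) tuples.

    Each coordinate is the mass-weighted average, exactly as in
    Bpx = (A_*Ax + B_*Bx + C_*Cx) / (A_ + B_ + C_).
    """
    total = sum(m for m, _, _ in objects)
    bpx = sum(m * x for m, x, _ in objects) / total
    bpy = sum(m * y for m, _, y in objects) / total
    return bpx, bpy

# Masses 15 at A(4,2), 20 at B(21,1), 10 at C(10,6):
# bpx = 580/45 = 12.888..., bpy = 110/45 = 2.444...
```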
-
- The easiest way to balance a tray is to place the heaviest object in the
- center. Hope this helps... PSi.
- ...........................................................................
-
- Fm: John M. Dlugosz 70007,4657 # 176424
- To: peter singer 100024,1603 (X) Date: 09-Jun-92 21:18:08
-
- I picked weights C=10, A=15, and B=20. After finding the point via your
- formula (150/45 , 12), I found the sum of the cross products between the
- weights (vector is [0 0 weight]) and the vector from the balance point (p -
- [150/45,12,0]). I got a sum of [-430, -430, 0]. I was expecting all zeros.
- What is the significance of this?
-
- --John
- ...........................................................................
-
- Fm: peter singer 100024,1603 # 176677
- To: John M. Dlugosz 70007,4657 (X) Date: 10-Jun-92 16:32:26
-
-
- >I picked weights C=10, A=15, and B=20. After finding the point via your
- >formula (150/45 , 12), I found the sum of the cross products between the
-          ^^^^^^^?????
- >weights (vector is [0 0 weight]) and the vector from the balance point
- >(p - [150/45,12,0]). I got a sum of [-430, -430, 0]. I was expecting all
- >zeros.
-
- You missed something... there are no vectors involved! I think you can also
- calculate the point of balance of a polygon this way, using for each corner a
- weight of 1 and the corner coords.
-
- Just simply calculation:
-
- A_+B_+C_ = 45
- Bpy = (15*2+20*1+10*6)/45 = 2.44444444444444444
- Bpx = (15*4+20*21+10*10)/45 = 12.88888888888888888
-
- If you now lay the axes through Bpx/Bpy you get the following coords:
- A(-8.89/-0.44) B(8.11/-1.44) C(-2.89/3.56)
-
- Now calculate the moments of the weights:
-
- for the x-axis: 15*(-0.44)+20*(-1.44)+10*3.56 = 0.2 --> 0! rounding diff.
- y-axis: 15*(-8.89)+20*8.11+10*(-2.89) = 0.05 --> 0! rounding diff.
-
- Is the fog gone??? PSi.
- ...........................................................................
-
- _________________________ Subj: Texture Mapping _________________________
-
- Fm: KGliner 70363,3672 # 343391
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 28-Apr-93 12:52:53
-
- John- what's there to say about texture-mapping? Wolfenstein and Ultima
- Underworld. The former uses a ray-casting approach, and the latter is
- polygon based. Using polygons is probably an easier approach for a graphics
- library: given a 4-sided convex polygon, map a square bitmap onto it (ie.
- where the four corners of the bitmap are stretched to fit into the four
- corners of the polygon). Where it gets complicated (especially with multiple
- polygons) is that you never want to draw a pixel to the same location twice
- (or as infrequently as possible). And even then you don't want to be
- executing more than about 5 to 8 assembly language instructions per pixel
- (and we're talking mov and add only). Any graphics library that could pull
- this off with any semblance of speed I'd buy in a heartbeat (as would many
- others, I imagine).
-
- Kevin
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 343879
- To: KGliner 70363,3672 (X) Date: 29-Apr-93 00:17:13
-
- Texture mapping as you describe sounds like a transformation applied to a
- bitmap. My own efforts in this area are designed to produce high quality
- results. For mapping, you want fast results.
-
- Mapping a bitmap to a polygon, you may have to stretch it or shrink it. In
- the former, one bitmap pixel maps to several target pixels. In the latter,
- some bitmap pixels go unused. Is that what you have in mind?
-
- Anyone here want to discuss techniques? Let's start with theory.
-
- --John
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 344108
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 29-Apr-93 11:40:10
-
- I've written similar code for a client in the image processing segment. I
- used a technique similar to Abrash's in Dr. Dobb's. The major stress was on
- quality and it works with 24-bit Truecolor bitmaps at 1024x768 resolution, so
- it may not apply to game design, in terms of speed<g>.
-
- When scan converting a polygon, set up four Bresenhams (or similar
- line-walkers): two for the polygon edges of the scan converter and two
- corresponding ones in the texture space. The ones in the texture space use
- the step count of the screen polygon walkers. Then at each scanline, set up
- a new line-walker that walks across the texture map. It must use the same
- number of steps as the two edge-walkers in screen space are apart. Then step
- across the polygon retrieving the color from the texture at the position
- indicated by the texture walker, and putting it at the right screen position.
- (Did anyone understand that??<g>).
-
- Let me try that in ASCII >groan<:
-
- Screen space A
- Polygon / \ c--------------a
- / \ | |
- / \ | |
- / \ | |
- / B d--------------b
- / /
- C-- / Bitmap to map
- ---- / onto Polygon
- --- D
-
- Mapping the bitmap abcd onto the polygon ABCD: set up a linewalker along AC
- and one along AB to do the major stepping along the y axis. Set up a
- linewalker along ac, using the same number of steps that AC is using, and
- one along ab, using the same number of steps as AB.
- In the loop over scanlines, step along AC and AB (for screen coords), and
- along ac and ab (for texture coords). Set up a linewalker for the texture
- space that goes from the current ac position to the current ab position,
- using the distance in x that AB and AC are apart as the number of steps.
- Retrieve pixels along this linewalker and place them in consecutive x
- positions on screen.
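A rough Python sketch of the same idea, using plain floating-point interpolation in place of the Bresenham walkers, so it shows the structure rather than the speed. All names are mine, and this version pairs opposite edges (A→C with B→D), a simplification of the adjacent-edge scheme described above:

```python
def lerp(a, b, t):
    """Linearly interpolate between 2D points a and b."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def map_texture(A, B, C, D, a, b, c, d, tex, put_pixel):
    """Linearly map texture quad a-b-c-d onto screen quad A-B-C-D.

    Corners are (x, y) pairs; tex[v][u] holds texels and
    put_pixel(x, y, color) writes one screen pixel. Two edge
    walkers run down A->C and B->D (and a-c, b-d in texture
    space); one fresh walker crosses each scanline.
    """
    rows = max(abs(C[1] - A[1]), abs(D[1] - B[1]), 1)
    for i in range(rows + 1):
        t = i / rows
        sl, sr = lerp(A, C, t), lerp(B, D, t)   # screen edge positions
        tl, tr = lerp(a, c, t), lerp(b, d, t)   # texture edge positions
        steps = max(int(round(abs(sr[0] - sl[0]))), 1)
        for j in range(steps + 1):
            s = j / steps
            sx, sy = lerp(sl, sr, s)            # screen sample
            u, v = lerp(tl, tr, s)              # texture sample
            put_pixel(int(round(sx)), int(round(sy)),
                      tex[int(round(v))][int(round(u))])
```

As in the description above, the screen walkers control the step counts, so each covered screen pixel fetches exactly one texel; and, like the original, it is a linear mapping with no perspective correction.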
-
- I hope this is understandable<g>. The implementation was written using Fixed
- Point arithmetic and is reasonably fast, although much too slow for games, I
- should think.
-
- This approach does not take into account any perspective foreshortening,
- though, since it uses linear mapping across the polygon.
-
- I'd be interested in other methods and theories behind texture mapping, too.
-
- - Lutz
- ...........................................................................
-
- Fm: KGliner 70363,3672 # 344272
- To: Lutz Kretzschmar 100023,2006 (X) Date: 29-Apr-93 16:19:28
-
- Lutz- I used a similar technique a while back, only I used linewalkers along
- opposite edges as opposed to adjacent edges. I did try a method using
- adjacent edges that had it drawing only vertical lines in the polygon, but I
- got distortion every time the linewalkers hit a corner. The whole reason for
- trying to draw vertical lines in the polygon is that it both speeds things up
- (not to mention you only draw one pixel per location on the screen) and makes
- it easier to do hidden surface removal too.
-
- Kevin
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 344303
- To: KGliner 70363,3672 (X) Date: 29-Apr-93 17:13:41
-
- Kevin,
-
- > only I used linewalkers along opposite edges as opposed to adjacent
- > edges.
- Huh? How does that work? If the polygon in my last message sits in screen
- space the way it's drawn, then how can you scan convert it by going along
- opposite edges? Do you then do the interpolation at an angle to the screen's
- coordinate system?
-
- > but I got distortion every time the linewalkers hit a corner.
- Hmm, I don't get any distortion there.
-
- > The whole reason for trying to draw vertical lines in the polygon is that
- > it both speeds things up...
- Now that again is contrary to what I have always believed<g>. All frame
- buffers I know of are best accessed horizontally, because the pixels occupy
- sequential addresses. Is this for some special mode? Or are you smarter than
- we all are?<g>
-
- > (not to mention you only draw one pixel per location on the screen)
- Hmm, the method I described does just that, since the scan converter only
- generates the pixels that the polygon itself will occupy on the screen. And
- only for those does it access the texture map.
-
- > ... and makes it easier to do hidden surface removal too.
- Can you elaborate a bit<g>? What advantage do verticals have over
- horizontals?
-
- - Lutz
- ...........................................................................
-
- Fm: KGliner 70363,3672 # 344951
- To: Lutz Kretzschmar 100023,2006 Date: 30-Apr-93 15:38:31
-
- I can see I tried to cram too much into too short a message for it to be
- clear. What I really need is a few charts, a blackboard, multi-color graphs,
- and lots of fancy symbols.<g>
-
- Anyway, I take it you move through your destination polygon by doing one
- horizontal scanline at a time? I tried an approach like this (only using
- vertical scanlines-- more on this in a sec), only I got a distortion at the
- corners. It would appear I didn't pursue it far enough, as you evidently got
- it to work. I'd be very interested to see what you did.
-
- What I ended up doing was running a diagonal (well, non-vertical and
- non-horizontal) line through the destination polygon and horizontal lines
- through the source bitmap. This worked, but there were too many wasted
- pixels. Oh-- for this approach I also switched to using opposite edges (ie.
- run my line walkers from corner A to corner D and from corner B to corner C,
- where the order of the points in the bitmap is A-B-C-D).
-
- Now for the whole vertical line bit. I use a buffer in conventional memory
- to do all my 3d calculations on. Once I've done them all, I want to be able
- to move the whole thing to video as fast as possible (copying each horizontal
- scanline in multi-byte chunks to the card). However, before that I want to
- draw to the buffer using vertical lines because: A) most dungeon crawl games
- are only concerned with yaw rotation, making it possible to do fast
- optimizations for any surfaces that are 'vertical' (ie. the walls in both
- Wolfenstein and Ultima Underworld...this doesn't speed up or slow down the
- other surfaces [like floors]); B) each pixel has to be addressed
- individually anyway (so word or dword writes are pointless), which makes the
- amount of the increment irrelevant (incrementing by the width of the screen
- takes just as long as incrementing by 1); and C) it helps with hidden surface
- removal (a whole discussion in itself that I'll hold off on for now).
- ...........................................................................
-
- Fm: E. Pinnell [CyberSim] 70031,435 # 344801
- To: KGliner 70363,3672 (X) Date: 30-Apr-93 09:46:34
-
- Kevin,
-
- I assumed that horizontal writes are faster, since you can write multiple
- pixels at a time by using a 16 bit or 32 bit MOVE.
-
- Eric Pinnell
- ...........................................................................
-
- Fm: KGliner 70363,3672 # 344953
- To: E. Pinnell [CyberSim] 70031,435 (X) Date: 30-Apr-93 15:38:42
-
- Eric- normally, yes, but since you have to address each byte individually
- when texture mapping, 16 and 32 bit moves don't really help.
-
- Kevin
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 344356
- To: Lutz Kretzschmar 100023,2006 (X) Date: 29-Apr-93 18:37:59
-
- The line walkers sound like what I was thinking of for simple magnification
- and reduction. They would tell you which rows and columns to duplicate (or
- remove). Stretching, and not just scaling, is more difficult.
-
- I don't really follow your method. Does it not provide for arbitrary
- stretching, or only for rotation and magnification?
-
- --John
- ...........................................................................
-
- Fm: Hans Peter Rushworth 100031,473 # 344467
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 29-Apr-93 20:33:09
-
- John,
-
- That program I sent you performs exactly the image mapping operation you are
- talking about. Try it out, and, if this does what you want, I'm willing to
- explain the method I used. The only difference is that it will Gouraud shade
- (and now dither) the bitmap as well as just "map" it.
-
- The main limitation is that it does not deal with perspective
- foreshortening. This is because it essentially is a 2D mapping which is
- performed after a 3D perspective transformation.
-
- The method is very simple, but is quite hard to explain in a short message.
-
- Peter.
- ...........................................................................
-
- Fm: KGliner 70363,3672 # 344273
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 29-Apr-93 16:19:34
-
- John- yes, that's more or less what I have in mind. The biggest key is to
- remove the waste-- ie. you never want to draw more than one pixel to the same
- location on the screen. How this is done with non-vertical/ non-horizontal
- line draws I don't know....
-
- Kevin
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 344357
- To: KGliner 70363,3672 (X) Date: 29-Apr-93 18:38:08
-
- I think the best method would be to scan over the dest shape, one pixel at a
- time. Each pixel gets a value from the source. This guarantees you get
- every pixel exactly once, and you can arrange a nice order for them, too.
- Mapping each dest location into the bitmap location is the tricky part. I
- imagine an efficient incremental algorithm can be generated.
-
- That is, as I move left-to-right in the screen image (which looks like a
- distorted diamond or kite perhaps), I need to move diagonally in the texture
- map, as well as change the rate of motion through it. So I have 4 constants,
- dx, dy, ddx, and ddy. I take the current x,y pixel, then do x+=dx; dx+=ddx;
- and likewise for the y's. The hard part is coming up with the constants,
- which is done once per row, and even _those_ might be generated incrementally
- from the previous row.
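The per-pixel inner loop John describes is second-order forward differencing; one row of it might be sketched in Python as (the constants here are placeholder inputs, since deriving them per row is the stated hard part):

```python
def walk_row(x, y, dx, dy, ddx, ddy, n):
    """Step n pixels along a texture-space path using the scheme
    above: position += delta, then delta += second difference,
    once per pixel. Returns the (x, y) texture samples for one
    screen row.
    """
    samples = []
    for _ in range(n):
        samples.append((x, y))
        x += dx
        dx += ddx   # the rate of motion through the texture changes too
        y += dy
        dy += ddy
    return samples
```

With dx=1 and ddx=1, for instance, the x samples come out 0, 1, 3, 6, ...: the step through the texture grows as the walker crosses the row.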
-
- What do you think?
-
- --John
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 345417
- To: KGliner 70363,3672 (X) Date: 01-May-93 08:49:29
-
- I had to do a lot of work on getting the results to be of good quality. I used
- code from Graphics Gems I for scan converting polygons, it handled the
- corners pretty well. I never got any distortion that way and I can't really
- imagine where it came from. What did the distortion look like?
-
- > What I ended up doing was running a diagonal line through the
- > destination polygon and horizontal lines through the source bitmap.
- Wow. Surely this would require a special linewalker for the diagonals, to
- cover every pixel. And the linewalker of the destination map would not be
- stepping in single step increments.
-
- I guess what I'm trying to say is that the polygon linewalker is the
- *controlling* walker that defines how many steps the bitmap walker should
- take. Doing it the other way round may do polygon pixels more than once (if
- the imagemap is bigger than the polygon) or skip pixels (if it's smaller).
-
- > Oh-- for this approach I also switched to using opposite edges (ie. run my
- > line walkers from corner A to corner D and from corner B to corner C
- I think I understand what you mean. Naturally one always uses opposite edges,
- but in my case I'm going through a polygon in screen space, where adjacent
- edges can make a scanline, whereas you were going through a
- coordinate-system-adjusted bitmap (i.e. unrotated). This would require using
- AD and BC in your example.
-
- You're right about vertical lines, but on the hardware I was using (not VGA)
- there was a *much* larger calculation overhead for the next scanline.
- Horizontal scan-conversion just seems more natural to me, I guess<g>. But if
- it's better for vertical stuff, that'd be better.
-
- If you ever discuss how this relates to hidden surface removal, let me
- know<g>. I would guess you could interpolate the distance on a vertical line
- as it approaches the eye. Whereas horizontal approaches would need a true
- Z-buffer approach. Hmm, but then so would vertical lines. OK, maybe not<g>.
-
- - Lutz
- ...........................................................................
-
- Fm: KGliner 70363,3672 # 346768
- To: Lutz Kretzschmar 100023,2006 (X) Date: 03-May-93 01:05:10
-
- Lutz- the problems you raise regarding the method I used for texture mapping
- were among the many reasons I abandoned that approach (and, more or less
- shelved it for a future product). I ended up doing texture mapping only to
- polygons whose left and right edges were vertical (a la Wolfenstein 3d),
- something that was suitable for my purposes and much more straightforward to
- code.
-
- I was going to talk about the hidden-surface removal stuff tonight, but
- I'm too tired to explain it clearly. I'll try to post something about it
- tomorrow.
-
- Kevin
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 345416
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 01-May-93 08:49:16
-
- John,
-
- the way I did it was to assign a coordinate system for the texture map to the
- polygon (I always convert four-sided polygons). So, for example, (using my
- ASCII drawing) the point A is (0,0), B is (1,0), C is (0,1) and D is (1,1).
- Setting up the line-walkers is simple interpolation, right? The only special
- thing about the line-walkers is that they always take a step in Y. This maps
- the bitmap to the polygon 1 to 1. I then put a 3x3 matrix transform between
- the mapping of the polygon points and the texture points. So, if translating
- by (0.5,0), A will be at (0.5,0), C at (0.5,1), etc. The matrix is built by
- concatenating translation, rotation or scaling as in normal 3D work. Then
- transform the corner points of the texture map (0..1) through the matrix and
- use these values to interpolate the starting conditions of the line-walkers.
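- For illustration, a rough C sketch of the two pieces described above: the 3x3
- transform applied to the (0..1) texture corners, and a line-walker that
- interpolates per scanline. The type and function names (uv_t, xform_uv, walk)
- are mine, not from the actual program.

```c
typedef struct { double u, v; } uv_t;   /* a texture coordinate in 0..1 */

/* Apply the affine part of a 3x3 transform (row-major) to a texture
   coordinate; the matrix is built by concatenating translation,
   rotation and scaling, as described above. */
static uv_t xform_uv(const double m[3][3], uv_t p)
{
    uv_t r;
    r.u = m[0][0]*p.u + m[0][1]*p.v + m[0][2];
    r.v = m[1][0]*p.u + m[1][1]*p.v + m[1][2];
    return r;
}

/* One "line-walker": linearly interpolate a texture coordinate down an
   edge, taking one step per scanline in Y. */
static uv_t walk(uv_t top, uv_t bot, int step, int nsteps)
{
    double t = (nsteps > 0) ? (double)step / nsteps : 0.0;
    uv_t r;
    r.u = top.u + t*(bot.u - top.u);
    r.v = top.v + t*(bot.v - top.v);
    return r;
}
```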
-
- For the scan-conversion, I used an adapted version of the code in Graphics
- Gems I (pg. 76 "Fast anti-aliasing polygon scan conversion", by Jack C.
- Morrison), since I needed to do anti-aliasing, too (like I said, high quality
- had priority over speed).
-
- - Lutz
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 345473
- To: Lutz Kretzschmar 100023,2006 (X) Date: 01-May-93 10:43:09
-
- That is pretty much what I was doodling yesterday. Given 4 sides, you can
- find the proportional position along the sides and use that to draw a line
- through the source bitmap. However, I don't think that will show perspective
- right. I'll see how it turns out.
-
- --John
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 346134
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 02-May-93 10:10:50
-
- John,
-
- yes, that won't work for perspective. I have no idea how to compensate for
- that, though.
-
- - Lutz
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 346224
- To: Lutz Kretzschmar 100023,2006 (X) Date: 02-May-93 12:18:12
-
- I do. The method allows for a quadratic in both the x and y sides of the
- parametric eqn. However, the hard part is coming up with all the parameters.
-
- BTW, I've got a _very_ elegant way of using image maps, using features in the
- new version of ViewPoint. I can better customize the polygon drawing engine
- on the fly, and can insert the texture mapping engine so it is called when a
- filled-polygon is being scan-converted. Isn't OOP wonderful?
-
- --John
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 346845
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 03-May-93 05:34:23
-
- Is the relationship really quadratic? Any way to use lookup tables for this
- to speed things up? I'd be interested in what kind of speed you're getting.
-
- > Isn't OOP wonderful?
- Yup, I'll never turn back<g>.
-
- - Lutz
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 347261
- To: Lutz Kretzschmar 100023,2006 Date: 03-May-93 20:13:29
-
- The simple mapping I'm planning (if I _ever_ get to coding it) is linear.
- Just a constant delta-x and delta-y, with about 4 lines of code to set it up.
-
- The second degree will be used to provide perspective and other effects. But
- I have no idea how to set it up! The simple way I'm going to do will match
- the way it will probably be used: Draw a rectangle in a transformed
- coordinate system and it comes out rotated, scaled, and skewed into some
- arbitrary quadrilateral. However, the scaling will be uniform. When
- scan-converting the resulting shape, it will use the original scale object
- that was used to produce the shape to unscale the endpoints of the scan line,
- which will map to points on the edge of the texture square (same size as the
- logical square I drew). Back in the object's original coordinate system now,
- find the change in x and y between the two points, and divide that by the
- number of steps (the number of pixels in the scan-converted row). Then step
- through
- the pattern at this angle and record the pixels in a buffer which is then PUT
- on the screen.
-
- Easier to code than explain.
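- In rough C, the constant-delta row stepping described above might look like
- this (the names and the tiny texture size are mine, purely for illustration):

```c
enum { TEXW = 8, TEXH = 8 };   /* illustrative texture size */

/* Step through the source bitmap at a fixed (dx,dy) per screen pixel,
   given the two texture-space points the scan line's endpoints map to,
   and record the pixels in a row buffer. */
static void tex_row(const unsigned char tex[TEXH][TEXW],
                    double x0, double y0, double x1, double y1,
                    int npix, unsigned char *out)
{
    /* change in texture x and y divided by the number of pixels */
    double dx = (npix > 1) ? (x1 - x0) / (npix - 1) : 0.0;
    double dy = (npix > 1) ? (y1 - y0) / (npix - 1) : 0.0;
    double x = x0, y = y0;
    int i;
    for (i = 0; i < npix; i++) {
        out[i] = tex[(int)y][(int)x];   /* fetch and record one pixel */
        x += dx;
        y += dy;
    }
}
```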
-
- ViewPoint will handle the scan-converting of the polygon, and call upon my
- poly_fill_alg object to render the rows as it generates them.
-
- --John
- ...........................................................................
-
- Fm: Jez San @ Argonaut 72247,3661 # 346543
- To: Lutz Kretzschmar 100023,2006 (X) Date: 02-May-93 19:41:24
-
- Do you anti alias your texture maps when they are scaled up larger than 1:1,
- eg: do your textures come out blocky when large or do you do bilinear
- interpolation to smooth out the chunky pixels?
-
- -- Jez.
- ...........................................................................
-
- Fm: Lutz Kretzschmar 100023,2006 # 346846
- To: Jez San @ Argonaut 72247,3661 (X) Date: 03-May-93 05:34:27
-
- Jez,
-
- in this program, I did not do interpolation, because the imagemaps are always
- much larger than the polygons. The program is used in the textile industry to
- change the fabric of upholstery on scanned images for example. The new fabric
- is mostly either scanned or drawn and will therefore always have enough
- detail. Mostly they have too much, which is why anti-aliasing was much more
- important.
-
- - Lutz
-
- ___________________________ Subj: Filled Polygons ___________________________
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 348393
- To: Dave Roach 71333,706 (X) Date: 05-May-93 11:32:46
-
- I suspect your diagram was reformatted by CIS, but I think you're describing
- a concave polygon (i.e. one with an internal angle greater than 180 degrees,
- creating an incursion into the interior). And, no, the algorithm doesn't work
- with concave polygons, only convex ones. But all concave polygons can be
- constructed from some combination of convex polygons, just as all polygons
- can be constructed from triangles (which are, of course, convex polygons). On
- the other hand, it works with some pretty complex convex polygons. I find
- that most concave polygons can be constructed from a very small number of
- n-sided convex polygons.
-
- If you've got a method for drawing fast concave polygons, I'd love to see it,
- though I gather it's proprietary. (Too bad!) I figured the speed gained by
- restricting the drawing code to convex polygons would offset the cost of
- decomposing concave polygons into convex ones.
-
- --Chris
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 348598
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 05-May-93 17:53:52
-
- I've profiled it, and found at the time that the cost of supporting standard
- polygons and not just convex ones is small. I don't remember the details.
- The cost of advancing each edge and testing for doneness is paid per edge,
- and is no worse for any single edge. So they are slower to draw because they
- have more edges, but there is no extra cost to supporting 2*n edges instead
- of just 2.
-
- --John
- ...........................................................................
-
- Fm: Chris Lampton [GAMPUB] 76711,301 # 348849
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 05-May-93 22:54:03
-
- One of these days I'll try my hand at a polygon fill routine that handles
- concave polygons to see how fast I can make it. Sounds like a bear to code,
- though.
-
- --Chris
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 349335
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 06-May-93 18:56:32
-
- No, it is quite simple. So much so that it was not worth making a special
- function for simpler cases once I had the "do everything" version.
-
- for each scanline:
- remove old edges (a for() plus 4 lines in its body)
- add new edges (a for() plus 2 lines in its body)
- draw between pairs of edges (2 lines, only because I support filling
- in two different ways. So it is an if/else)
- update edges for next scanline (a for() plus 2 lines in the body)
-
- How different is that from a one-region function? My list of active edges
- can have more than 2 items, that's all. Everything else-- testing to see if
- an edge needs to be added or removed _here_, incrementing the edges, drawing
- a horizontal row, etc. is the same. This loop is 30 lines long, and includes
- a lot of vertical whitespace and unnecessary comments. From the analysis
- above, the only real effect is that each step is embedded in a for() loop.
- For 2 edges always, you would just have the statement repeated twice, once
- for each edge. For an arbitrary number of edges, I loop instead.
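- A compilable sketch of that active-edge-list loop (the edge layout and names
- are my own, not John's; the "draw" step here just counts covered pixels so
- the loop can be sanity-checked):

```c
#include <stdlib.h>

typedef struct { int ytop, ybot; double x, dxdy; } edge_t;

static int cmp_x(const void *a, const void *b)
{
    double d = ((const edge_t *)a)->x - ((const edge_t *)b)->x;
    return (d > 0) - (d < 0);
}

/* Fill between pairs of active edges for scanlines ymin..ymax-1.
   Returns the number of pixels that would be filled. */
static long scan_fill(edge_t *edges, int nedges, int ymin, int ymax)
{
    edge_t active[32];
    int nactive = 0, i, y;
    long filled = 0;
    for (y = ymin; y < ymax; y++) {
        /* remove old edges */
        for (i = 0; i < nactive; )
            if (active[i].ybot <= y) active[i] = active[--nactive];
            else i++;
        /* add new edges that start on this scanline */
        for (i = 0; i < nedges; i++)
            if (edges[i].ytop == y) active[nactive++] = edges[i];
        /* keep active edges sorted in x, then draw between pairs */
        qsort(active, nactive, sizeof active[0], cmp_x);
        for (i = 0; i + 1 < nactive; i += 2)
            filled += (long)(active[i+1].x - active[i].x);
        /* update edges for the next scanline */
        for (i = 0; i < nactive; i++)
            active[i].x += active[i].dxdy;
    }
    return filled;
}
```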
-
-
- --John
- ...........................................................................
-
- Fm: Rasputin 71005,2524 # 349644
- To: John Dlugosz [ViewPoint] 70007,4657 (X) Date: 07-May-93 01:08:49
-
- I'm pretty much lost on your pseudocode... I call the function with a pointer
- to a list of points (sound logical eh?), and the function calculates the line
- points between each successive point and stores them in an array... then they
- get sorted by their y component, checking
- for duplicate y values... then after they are sorted I use rep stosw to fill
- the line left to right... I am wondering how one would go about integrating
- and eliminating the sort function by just comparing the endpoints... I'm sure
- that it's extremely easy, but it just seems to elude me... I always end up
- with a case where the points are in some orientation that I hadn't thought
- of! I figured that someone on here might have some code that they would like
- to share... In return I'll upload a cool map maker with the source... any
- takers?!?!??
- Entropy
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 350069
- To: Rasputin 71005,2524 Date: 07-May-93 18:37:05
-
- Instead of computing all the individual points in the line, the edge
- generates each point as needed. The edges, not every single point, are
- sorted by the y values of their upper points.
-
- What kind of map maker?
-
- --John
- ...........................................................................
-
- Fm: Dave Roach 71333,706 # 348968
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 06-May-93 06:19:51
-
- There really isn't that much of a speed decrease, my routine requires an
- overhead of about 5k to get its optimal speed. By the way, I'm still looking
- for a routine that can take the vertices that make up a concave poly line
- drawing and give me the outline... Any takers... <g>
-
-
- Dave
- ...........................................................................
-
- Fm: Quentin Correll 70233,2264 # 349185
- To: Dave Roach 71333,706 (X) Date: 06-May-93 14:40:50
-
- Hi Dave,
-
- >> By the way, I'm still looking for a routine that can take the vertices
- that make up a concave poly line drawing and give me the outline... Any
- takers... <g><<
-
- Haven't we been here before?... ;-)
-
- This seems to be a slightly different statement of the problem we discussed
- earlier. When last you and I chatted I had the impression that you weren't
- able to determine a "feature edge" to even arrive at the vertices. Have
- you made some progress? How do you extract the vertices?
-
- Q
- ...........................................................................
-
- Fm: John Dlugosz [ViewPoint] 70007,4657 # 348597
- To: Dave Roach 71333,706 (X) Date: 05-May-93 17:53:47
-
- Your picture came out reformatted.
-
- The only difference to handle complex polys is to make sure the array of
- edges is still sorted after you increment all the edges. It is not a major
- step-- after all, they had to be sorted once when you started, so the code is
- in there.
-
- The difference between h-convex polys and standard polys (standard are
- allowed to be concave but don't self-intersect) is more significant. You have
- a single left edge list and a right edge list, and don't need to maintain
- multiple "fingers".
-
- A triangle is even simpler yet-- only one edge change at most.
-
- --John
- ...........................................................................
-
- Fm: Rasputin 71005,2524 # 348909
- To: Chris Lampton [GAMPUB] 76711,301 (X) Date: 06-May-93 01:07:01
-
- Okay, I think I follow your description of how to do it. Now comes the part
- of actually trying to put it into code... 3D graphics definitely makes for
- interesting algorithms. Thanks again for your help.
-
- ___________________________ Subj: Gouraud Shading ___________________________
-
- Fm: Ed Goldman 72630,2763 # 364901
- To: all Date: 29-May-93 12:11:49
-
- I've been thinking about how to go about Gouraud shading a polygon surface.
- I've got some ideas on this and questions regarding the vector math (I'm very
- rusty on the math -- haven't used it since high school).
-
- It seems to me that the key to doing Gouraud shading, and the most time
- consuming part, is to obtain the unit normal vector for each vertex in the
- object. Once you have that, it looks like calculating the intensity of each
- pixel within the polygon in the drawing function is fairly straightforward.
- It looks like the best way to handle this is to calculate a point in object
- coordinates which represents this unit normal for each vertex when the object
- is loaded and initialized. These points go thru the same translations as the
- vertices when going from object->world->view. This may double the number of
- points that must be translated per frame, but the alternative of calculating
- the unit vertex normal on the fly seems much worse.
-
- Now the math. The vertex normal is the average of the normals of all the
- polygons the vertex is part of. So the first step is to find each face the
- vertex sits on and calculate the normal for that face. The normal for each
- face is the cross-product of any 2 vector edges on the face radiating from
- the same point -- I think I understand the math here. Now here's where I'm
- not entirely sure of the math. If the vertex is on 3 faces whose normal
- vectors are A, B, and C, then to get the vertex normal (N) do:
-
- N.x = (A.x + B.x + C.x)/3;
- N.y = (A.y + B.y + C.y)/3;
- N.z = (A.z + B.z + C.z)/3;
-
- (Is this how vectors average?)
-
- Now scale this to a unit normal and find the point in object space that
- represents this:
-
- length = sqrt( N.x*N.x + N.y*N.y + N.z*N.z );
- UNormPoint.x = ObjectVert.x + N.x/length;  /* x component distance from vertex */
- UNormPoint.y = ObjectVert.y + N.y/length;  /* y component distance from vertex */
- UNormPoint.z = ObjectVert.z + N.z/length;  /* z component distance from vertex */
-
- Does this all hang together? Are my basic assumptions about Gouraud shading
- correct? All comments appreciated ....
-
- -edg-
- ...........................................................................
-
- Fm: Hans Peter Rushworth 100031,473 # 365432
- To: Ed Goldman 72630,2763 (X) Date: 30-May-93 10:42:25
-
- Couple of points Ed,
-
- For Gouraud shading *static* objects, you are really only concerned with the
- vertex intensities (which you calculate from the average of the vertex
- normals). If the light source is fixed wrt the surface, (say a mountain face
- lit by the sun), then you don't need to re-evaluate the vertex intensities
- every frame. Whatever your viewing position, the intensity will be the same.
-
- For moving objects you don't *need* to recalculate the surface normals, just
- the vertex normal. This you do by transforming the original vertex normal
- with the rotational 3x3 part of the viewing transformation matrix. If you do
- that you don't have to re-normalise the normal, so all you are left with is
- the calculation of the vertex intensity, which is a simple dot product with
- the normalised light source vector. In other words:
-
- intensity = light.x*(tm[0][0]*ovn.x + tm[0][1]*ovn.y + tm[0][2]*ovn.z) +
- light.y*(tm[1][0]*ovn.x + tm[1][1]*ovn.y + tm[1][2]*ovn.z) +
- light.z*(tm[2][0]*ovn.x + tm[2][1]*ovn.y + tm[2][2]*ovn.z);
-
- where "light" is the normalised light source vector
- "ovn" is the original vertex normal (IOW the normal you get
- for the untransformed object (calculated once))
- "tm" is the rotation part of the matrix (no scaling terms)
-
- To precalculate the ovn, you proceed as you suggested. Remember that a vector
- indicates direction only - you don't need to add your ObjectVert to the
- normalised vector - doing so will give an incorrect result (assuming the
- light source is very far away). I'll summarise this in case others are
- interested:
-
- surface normal = normalised cross product of 2 vectors along 2 edges of
- the polygon (v0,v1,v2 are position vectors):
-
- sn.x = (v1.y - v0.y)*(v2.z - v1.z) - (v1.z - v0.z)*(v2.y - v1.y);
- sn.y = (v1.z - v0.z)*(v2.x - v1.x) - (v1.x - v0.x)*(v2.z - v1.z);
- sn.z = (v1.x - v0.x)*(v2.y - v1.y) - (v1.y - v0.y)*(v2.x - v1.x);
-
- to normalise:
- length = sqrt(sn.x*sn.x + sn.y*sn.y + sn.z*sn.z);
- sn.x /= length;
- sn.y /= length;
- sn.z /= length;
-
- vertex normal = sum of surface normals / n
- ovn.x = (sn[1].x + sn[2].x + ... sn[n].x)/n;
- ovn.y = (sn[1].y + sn[2].y + ... sn[n].y)/n;
- ovn.z = (sn[1].z + sn[2].z + ... sn[n].z)/n;
-
- Phew!
-
- Other points (for interest only)
-
- The surface normals are useful for performing backface removal (take dot
- product of the normal with the vector from the origin to any vertex on the
- polygon, and look at its sign), and in this instance you would re-calculate
- or transform stored surface normals every frame anyway, so you may find that
- calculating the vertex normal is faster if you just perform the averaging
- step on the already transformed surface normals rather than transforming the
- previous vertex normals (fewer multiplies). Either way should work, depending
- on the organisation of your data. Remember that if a vector is normalised,
- and you use the unscaled upper-left 3x3 part of the matrix (which are 3 sets
- of normal vectors), then you don't need to normalise the product, which can
- save you unnecessary square roots and other nasties.
-
- Hope that helps
-
- Peter.
- ...........................................................................
-
- Fm: Ed Goldman 72630,2763 # 366079
- To: Hans Peter Rushworth 100031,473 (X) Date: 31-May-93 11:45:51
-
- Thanks Peter!
-
- Seems like the only real mistake I made in the math was adding the vertex
- normal x, y, and z components to the vertex points (I thought I needed a
- *point* in object space which corresponded to the vertex normal, and that
- this point would need to be translated with the matrices along with the
- object's vertices).
-
- A question about your reply: If I've already got scaled unit surface normals
- for all faces of an object built into the object structure (which I was
- already planning to do for backface removal), then getting the vertex unit
- normal is simply a matter of averaging all these normals for the faces the
- vertex sits on? I thought I'd still have to scale the vertex normal to a
- unit vector after averaging which is why I was planning on storing and
- translating vertex normals as well -- so that I'd only have to do the scaling
- of the vector, which involves a sqrt(), once up front when the object is
- loaded and init'd. If all I have to do is average surface unit normals, then
- it seems I don't have to bother maintaining and translating vertex normals in
- the object.
-
- -ed-
- ...........................................................................
-
- Fm: Hans Peter Rushworth 100031,473 # 366133
- To: Ed Goldman 72630,2763 (X) Date: 31-May-93 13:02:01
-
- Ed,
-
- >> I thought I needed a *point* in object space which corresponded to the
- vertex normal, and that this point would need to be translated with the
- matrices along with the object's verteces.
-
- That might still work, but after you have transformed the vertex normal
- point, you would need to subtract from the new point the transform of the
- point you originally added to it to get the vertex normal. The surface and
- vertex normals
- indicate *directions* and not *positions*. If you normalise a vector, any
- position information is lost, and you retain only the direction information.
-
- >> question...getting the vertex unit normal is simply a matter of averaging
- all the [surface] normals for the faces the vertex sits on?
-
- No, sorry, I made a mistake; that isn't true. You would have to normalise the
- average vector to make the vertex normal. I missed out a few steps (see
- below), and well spotted <g>. You could omit the division step during
- averaging, so the vertex vector is the *sum* of the associated surface
- normals, and the vertex normal is the vertex vector divided by the length of
- the vertex vector. But...
-
- >> If all I have to do is average surface unit normals, then it seems I don't
- have to bother maintaining and translating vertex normals in the object.
-
- If you would still like to do that, there is an easy way out! What you do
- (during initialisation) is take the initial *summed* vertex vector (before
- you would normalise it to be a vertex normal), and work out its length. Now
- store the length (not the vertex normal) in with the object. When you come to
- work out the transformed vertex normal, sum the transformed surface normals
- and divide each component of the result by the stored length (if you prefer,
- multiply it by 1/length). That should effect the generation and normalisation
- of the vertex normal, on the fly, without a square root or extra
- transformations.
-
- (I think I got out of that one <g>)
-
- Peter.
- ...........................................................................
-
- Fm: Ed Goldman 72630,2763 # 366390
- To: Hans Peter Rushworth 100031,473 (X) Date: 31-May-93 19:07:50
-
- Peter,
-
- Thanks. I see what you're getting at, but I'm a bit confused. Are you
- saying that an average of surface normals never needs to be done to get the
- vertex normal, just a summation and div by precalc'd length?
-
- Looks like what you're saying are these steps:
-
- 1) Sum X, Y, Z components of surface normals.
- 2) get length by sqrt( X^2 + Y^2 + Z^2) where X,Y,Z are the summed values.
- Store this length with each vertex. 3) When it's time to draw the frame,
- after the rotation matrix has been applied to the normals, just sum the
- surface normals and div by stored length. This gives the vertex normal.
-
- If that's the way it works, great!
-
- One more thing, just about terminology: A surface normal, for instance, is a
- vector perpendicular to the plane. A *unit* surface normal would be vector
- where length = 1. When you say "normalize a normal" does that mean scaling a
- normal to a unit normal?
-
- I really should have paid more attention in math class <g> ....
-
- -ed-
- ...........................................................................
-
- Fm: Hans Peter Rushworth 100031,473 # 366783
- To: Ed Goldman 72630,2763 (X) Date: 01-Jun-93 09:23:16
-
- Ed,
-
- By multiplying (or dividing) a vector by a scalar you don't change its
- direction. So the vectors (1,2,3) and (2,4,6) and (0.1,0.2,0.3) all point in
- the same direction. The vector (0,0,0) and multiplying a vector by 0 are
- special cases that must be avoided. When you average (as opposed to summing)
- vectors, the result only differs in its length, not its direction. So for
- the average of 3 vectors, the division by 3 is superfluous if all you want at
- the end is a "length 1" vector.
-
- v = (a+b+c)/3; v /= |v|; == v = (a+b+c); v /= |v|;
-
- because (v/3) / |v/3| == (v / |v|) * (3 / 3) == v / |v|
-
- I use "normalising" to mean making a vector's length equal to 1. Having all
- the vectors the *same length* is important if you intend to add them
- together, and having them all *length 1* is useful if you use the vectors in
- dot products eg:
-
- cos(theta) = a.b/|a||b|
-
- if the vectors are "length 1" already this reduces to
-
- cos(theta) = a.b/(1*1) = a.b
-
- similarly for cross products and lots of other vector algebra.
-
- Remember also that a *rotation* matrix transformation applied to a vector
- doesn't affect the length of the vector, only the direction.
-
- >> Are you saying that an average of surface normals never needs to be done
- to get the vertex normal, just a summation and div by precalc'd length?
-
- Yes. You have a set of original surface normals; you add them up and get a
- vector of length L. However you rotate the normals, you may get a vector
- sum pointing in a different direction, but the length will always be L.
-
- >>
- 1) Sum X, Y, Z components of surface normals. [this sum is a vector the
- same length however the surface normals are oriented]
- 2) get length by sqrt(X^2 + Y^2 + Z^2) where X,Y,Z are the summed
- values. Store this length with each vertex. [Now you know how to
- "normalise" the vertex normal]
- 3) When it's time to draw the frame, after the rotation matrix has been
- applied to the normals, just sum the surface normals [to get the
- direction of the vertex normal] and div by stored length. [to make
- it a "normalised" vector] This gives the vertex normal.
-
- That's it.
-
- >> about terminology:
-
- Not sure; a normal vector might mean a vector of any length normal to a
- plane, and using "normalising" to mean making a vector's length 1 is perhaps
- bad terminology on my part. I tend to confuse the term "unit" with vectors like
- (0,1,0) which are often also described as "base" vectors (ie (1,0,0), (0,1,0)
- and (0,0,1) form a "basis" for spanning R3).
-
- >> I really should have paid more attention in math class <g> ....
-
- Well, if they applied the vector algebra to Gouraud shading and computer
- graphics, I think both of us would be much more knowledgeable about it.
- Addressing enquiries like yours helps keep what little there is from getting
- too rusty to be of use. <g>
-
- Peter.
- ...........................................................................
-
- Fm: Hans Peter Rushworth 100031,473 # 365438
- To: Ed Goldman 72630,2763 (X) Date: 30-May-93 10:47:57
-
- I forgot...
-
- When you 3D clip the polygons that are to be Gouraud shaded, you must
- remember to push and possibly create interpolated vertex intensities along
- with the vertex points. This is exactly analogous to the treatment of the
- points' position components. If you fail to do this you will get strange
- results as the objects move out of the view window. (I guess that is an
- obvious point).
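- As a sketch (names mine): clipping an edge against a plane z = znear carries
- the intensity along with the same interpolation parameter t used for the
- position components:

```c
typedef struct { double x, y, z, i; } cvert;  /* position + intensity */

/* Clip the edge a->b against the plane z = znear, returning the
   interpolated vertex on the boundary. */
static cvert clip_at(cvert a, cvert b, double znear)
{
    double t = (znear - a.z) / (b.z - a.z);
    cvert c;
    c.x = a.x + t*(b.x - a.x);
    c.y = a.y + t*(b.y - a.y);
    c.z = znear;
    c.i = a.i + t*(b.i - a.i);  /* intensity treated like a position component */
    return c;
}
```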
-
- Peter.
- ...........................................................................
-
- Fm: Steve Ringley 73727,1202 # 366142
- To: Ed Goldman 72630,2763 (X) Date: 31-May-93 13:23:47
-
- I don't know too much about it, but if you have not already, download
- LATHE.ZIP from Lib 15 of the Zenith forum. It does Gouraud shading, and the
- $30 registration includes C source code.
- ...........................................................................
-
-